Research Article | Open Access
Volume 2022 | Article ID 9803570 | https://doi.org/10.34133/2022/9803570

SegVeg: Segmenting RGB Images into Green and Senescent Vegetation by Combining Deep and Shallow Methods

Mario Serouart,1,2 Simon Madec,1,3 Etienne David,1,2,4 Kaaviya Velumani,4 Raul Lopez Lozano,2 Marie Weiss,2 and Frédéric Baret2

1Arvalis, Institut du végétal, 228, route de l’aérodrome - CS 40509, 84914 Avignon Cedex 9, France
2INRAE, Avignon Université, UMR EMMAH, UMT CAPTE, 228, route de l’aérodrome - CS 40509, 84914 Avignon Cedex 9, France
3CIRAD, UMR TETIS, F-34398 Montpellier, France
4Hiphen SAS, 228, route de l’aérodrome - CS 40509, 84914 Avignon Cedex 9, France

Received: 15 Mar 2022
Accepted: 26 Aug 2022
Published: 12 Nov 2022

Abstract

Pixel segmentation of high-resolution RGB images into chlorophyll-active or nonactive vegetation classes is a first step often required before estimating key traits of interest. We have developed the SegVeg approach for semantic segmentation of RGB images into three classes (background, green vegetation, and senescent vegetation). This is achieved in two steps: a U-net model is first trained on a very large dataset to separate whole vegetation from the background; the green and senescent vegetation pixels are then separated using an SVM, a shallow machine learning technique, trained on a selection of pixels extracted from images. The performance of the SegVeg approach is then compared to that of a 3-class U-net model trained with weak supervision on RGB images, using the SegVeg segmentations as ground-truth masks. Results show that the SegVeg approach accurately segments the three classes. However, some confusion is observed, mainly between background and senescent vegetation, particularly over the dark and bright regions of the images. The U-net model achieves similar performances, with a slight degradation over green vegetation: the pixel-based SVM approach provides a more precise delineation of the green and senescent patches than the convolutional U-net. Using the components of several color spaces improves the classification of vegetation pixels into green and senescent. Finally, the models are used to predict the fractions of the three classes over whole images or regularly spaced grid pixels. Results show that the green fraction is very well estimated () by the SegVeg model, while the senescent and background fractions show slightly degraded performances (, respectively), with mean 95% confidence error intervals of 2.7% and 2.1% for senescent vegetation and background, versus 1% for green vegetation. We have made SegVeg publicly available as a ready-to-use script and model, along with the entire annotated grid-pixel dataset. We thus hope to make segmentation accessible to a broad audience, requiring neither manual annotation nor expert knowledge, or at least offering a pretrained model for more specific uses.
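
The two-step pipeline summarized above can be sketched in a few lines of code. The snippet below is a minimal illustration, not the authors' released script: it assumes an arbitrary callable producing a binary vegetation mask in place of the trained U-net, a scikit-learn SVM trained on the annotated grid pixels, and RGB/HSV/Lab components as a stand-in for the several color spaces used as pixel features; all function names and the class encoding are hypothetical.

# Minimal sketch of the two-step SegVeg pipeline (illustrative only; the
# released SegVeg script may differ). `vegetation_mask_fn` stands in for
# the trained U-net, and the class encoding (0 = background, 1 = green,
# 2 = senescent) is an assumption made for this example.
import numpy as np
from skimage.color import rgb2hsv, rgb2lab
from sklearn.svm import SVC


def pixel_features(rgb):
    """Per-pixel color features: RGB, HSV, and Lab components stacked."""
    rgb = rgb.astype(np.float32) / 255.0
    feats = np.concatenate([rgb, rgb2hsv(rgb), rgb2lab(rgb)], axis=-1)
    return feats.reshape(-1, feats.shape[-1])


def segveg_predict(rgb, vegetation_mask_fn, svm: SVC):
    """Label each pixel as background (0), green (1), or senescent (2)."""
    labels = np.zeros(rgb.shape[:2], dtype=np.uint8)   # background by default
    veg = vegetation_mask_fn(rgb).astype(bool)          # step 1: vegetation vs. background
    if veg.any():
        feats = pixel_features(rgb)[veg.ravel()]        # step 2: SVM on vegetation pixels only
        labels[veg] = np.where(svm.predict(feats) == 1, 1, 2)
    return labels


def class_fractions(labels):
    """Fractions of background, green, and senescent pixels in a label map."""
    counts = np.bincount(labels.ravel(), minlength=3)
    return counts / counts.sum()

With a trained mask predictor and SVM in hand, class_fractions(segveg_predict(image, unet_mask, svm)) would return the background, green, and senescent fractions of the kind reported in the results.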
